24 research outputs found

    The TAL Effector PthA4 Interacts with Nuclear Factors Involved in RNA-Dependent Processes Including a HMG Protein That Selectively Binds Poly(U) RNA

    Plant pathogenic bacteria utilize an array of effector proteins to cause disease. Among them, transcriptional activator-like (TAL) effectors are unusual in that they modulate transcription in the host. Although the target genes and DNA specificity of TAL effectors have been elucidated, how TAL proteins control host transcription is poorly understood. Previously, we showed that the Xanthomonas citri TAL effectors PthAs 2 and 3 preferentially targeted a citrus protein complex associated with transcription control and DNA repair. To extend our knowledge of the mode of action of PthAs, we have identified new protein targets of the PthA4 variant, which is required to elicit canker on citrus. Here we show that all the PthA4-interacting proteins are DNA- and/or RNA-binding factors implicated in chromatin remodeling and repair, gene regulation and mRNA stabilization/modification. The majority of these proteins, including a structural maintenance of chromosomes protein (CsSMC), a translin-associated factor X (CsTRAX), a VirE2-interacting protein (CsVIP2), a high mobility group protein (CsHMG) and two poly(A)-binding proteins (CsPABP1 and 2), interacted with each other, suggesting that they assemble into a multiprotein complex. CsHMG was shown to bind DNA and to interact with the invariable leucine-rich repeat region of PthAs. Surprisingly, both CsHMG and PthA4 interacted with CsPABP1 and 2 and showed selective binding to poly(U) RNA, a property that is novel among HMGs and TAL effectors. Given that homologs of CsHMG, CsPABP1, CsPABP2, CsSMC and CsTRAX in other organisms assemble into protein complexes to regulate mRNA stability and translation, we suggest a novel role for TAL effectors in mRNA processing and translational control.

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive a RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a Bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients were 10 (−1 to 16) in the ACE inhibitor group (n = 231), 8 (−1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% Bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively.
Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570

    Widespread diversity deficits of coral reef sharks and rays

    A global survey of coral reefs reveals that overfishing is driving resident shark species toward extinction, causing diversity deficits in reef elasmobranch (shark and ray) assemblages. Our species-level analysis revealed global declines of 60 to 73% for five common resident reef shark species, with individual species going undetected at 34 to 47% of surveyed reefs. As reefs become more shark-depleted, rays begin to dominate assemblages. Shark-dominated assemblages persist in wealthy nations with strong governance and in highly protected areas, whereas poverty, weak governance, and a lack of shark management are associated with depauperate assemblages mainly composed of rays. Without action to address these diversity deficits, loss of ecological function and ecosystem services will increasingly affect human communities.

    An international observational study to assess the impact of the Omicron variant emergence on the clinical epidemiology of COVID-19 in hospitalised patients

    Background: Whilst timely clinical characterisation of infections caused by novel SARS-CoV-2 variants is necessary for evidence-based policy response, individual-level data on infecting variants are typically only available for a minority of patients and settings. Methods: Here, we propose an innovative approach to study changes in COVID-19 hospital presentation and outcomes after the Omicron variant emergence, using publicly available population-level data on variant relative frequency to infer the SARS-CoV-2 variants likely responsible for clinical cases. We apply this method to data collected by a large international clinical consortium before and after the emergence of the Omicron variant in different countries. Results: Our analysis, which includes more than 100,000 patients from 28 countries, suggests that in many settings patients hospitalised with Omicron variant infection less often presented with commonly reported symptoms compared to patients infected with pre-Omicron variants. Patients with COVID-19 admitted to hospital after the Omicron variant emergence had lower mortality compared to patients admitted during the period when the Omicron variant was responsible for only a minority of infections (odds ratio in a mixed-effects logistic regression adjusted for likely confounders, 0.67 [95% confidence interval 0.61-0.75]). Qualitatively similar findings were observed in sensitivity analyses with different assumptions on population-level Omicron variant relative frequencies, and in analyses using available individual-level data on infecting variant for a subset of the study population.
Conclusions: Although clinical studies with matching viral genomic information should remain a priority, our approach, combining publicly available data on variant frequency with a multi-country clinical characterisation dataset of more than 100,000 records, allowed analysis of data from a wide range of settings and yielded novel insights into the real-world heterogeneity of COVID-19 presentation and clinical outcomes.

    A Bayesian reanalysis of the Standard versus Accelerated Initiation of Renal-Replacement Therapy in Acute Kidney Injury (STARRT-AKI) trial

    Background Timing of initiation of kidney-replacement therapy (KRT) in critically ill patients remains controversial. The Standard versus Accelerated Initiation of Renal-Replacement Therapy in Acute Kidney Injury (STARRT-AKI) trial compared two strategies of KRT initiation (accelerated versus standard) in critically ill patients with acute kidney injury and found neutral results for 90-day all-cause mortality. Probabilistic exploration of the trial endpoints may enable greater understanding of the trial findings. We aimed to perform a reanalysis using a Bayesian framework. Methods We performed a secondary analysis of all 2927 patients randomized in the multinational STARRT-AKI trial, conducted at 168 centers in 15 countries. The primary endpoint, 90-day all-cause mortality, was evaluated using hierarchical Bayesian logistic regression. A spectrum of priors was used, including optimistic, neutral, and pessimistic priors, along with priors informed by earlier clinical trials. Secondary endpoints (KRT-free days and hospital-free days) were assessed using zero–one inflated beta regression. Results The posterior probability of benefit comparing an accelerated versus a standard KRT initiation strategy for the primary endpoint suggested no important difference, regardless of the prior used (absolute difference of 0.13% [95% credible interval [CrI] −3.30%; 3.40%], −0.39% [95% CrI −3.46%; 3.00%], and 0.64% [95% CrI −2.53%; 3.88%] for neutral, optimistic, and pessimistic priors, respectively). There was a very low probability that the effect size was equal to or larger than a consensus-defined minimal clinically important difference. Patients allocated to the accelerated strategy had fewer KRT-free days (median absolute difference of −3.55 days [95% CrI −6.38; −0.48]); the probability that the accelerated strategy was associated with more KRT-free days was 0.008.
Hospital-free days were similar between strategies: the accelerated strategy had a median absolute difference of 0.48 more hospital-free days (95% CrI −1.87; 2.72) compared with the standard strategy, with a probability of 0.66 that the accelerated strategy had more hospital-free days. Conclusions In a Bayesian reanalysis of the STARRT-AKI trial, we found a very low probability that an accelerated strategy has clinically important benefits compared with the standard strategy. Patients receiving the accelerated strategy probably have fewer days alive and KRT-free. These findings do not support the adoption of an accelerated strategy of KRT initiation.

    A multi-country analysis of COVID-19 hospitalizations by vaccination status

    Background: Individuals vaccinated against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), when infected, can still develop disease that requires hospitalization. It remains unclear whether these patients differ from hospitalized unvaccinated patients with regard to presentation, coexisting comorbidities, and outcomes. Methods: Here, we use data from an international consortium to study this question and assess whether differences between these groups are context specific. Data from 83,163 hospitalized COVID-19 patients (34,843 vaccinated, 48,320 unvaccinated) from 38 countries were analyzed. Findings: While typical symptoms were more often reported in unvaccinated patients, comorbidities, including some associated with worse prognosis in previous studies, were more common in vaccinated patients. Considerable between-country variation was observed in both in-hospital fatality risk and the vaccinated-versus-unvaccinated difference in this outcome. Conclusions: These findings will inform the allocation of healthcare resources in future surges, as well as the design of longer-term international studies to characterize changes in the clinical profile of hospitalized COVID-19 patients related to vaccination history. Funding: This work was made possible by the UK Foreign, Commonwealth and Development Office and Wellcome (215091/Z/18/Z, 222410/Z/21/Z, 225288/Z/22/Z, and 220757/Z/20/Z); the Bill & Melinda Gates Foundation (OPP1209135); and the philanthropic support of the donors to the University of Oxford's COVID-19 Research Response Fund (0009109). Additional funders are listed in the "acknowledgments" section.
